
    Improving exploration in reinforcement learning with temporally correlated stochasticity

    Reinforcement learning is a useful approach to solving machine learning problems by self-exploration when training samples are not provided. However, researchers usually ignore the importance of the choice of exploration noise. In this paper, I show that temporally self-correlated exploration stochasticity, generated by an Ornstein-Uhlenbeck process, can significantly enhance the performance of reinforcement learning tasks by improving exploration.
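    A minimal sketch of this kind of temporally correlated exploration noise, using a discretized Ornstein-Uhlenbeck process, is given below. The class interface and parameter values (theta, sigma, dt) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class OUNoise:
    """Discretized Ornstein-Uhlenbeck process: dx = theta*(mu - x)*dt + sigma*dW."""

    def __init__(self, action_dim, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2, seed=0):
        self.mu = mu * np.ones(action_dim)
        self.theta, self.sigma, self.dt = theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.reset()

    def reset(self):
        # Restart the process at its long-run mean, e.g. at the start of each episode.
        self.x = self.mu.copy()

    def sample(self):
        # Successive samples are correlated in time, unlike i.i.d. Gaussian noise.
        self.x = self.x + self.theta * (self.mu - self.x) * self.dt \
            + self.sigma * np.sqrt(self.dt) * self.rng.standard_normal(self.mu.shape)
        return self.x


# Hypothetical usage: perturb a deterministic policy's action during exploration.
noise = OUNoise(action_dim=2)
action_perturbation = noise.sample()  # add this to the policy's action
```

    Unlike white Gaussian noise, successive samples drift smoothly, so the exploratory actions form more coherent trajectories.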

    Self-organization of action hierarchy and compositionality by reinforcement learning with recurrent neural networks

    Recurrent neural networks (RNNs) for reinforcement learning (RL) have shown distinct advantages, e.g., solving memory-dependent tasks and meta-learning. However, little effort has been spent on improving RNN architectures and on understanding the underlying neural mechanisms for performance gain. In this paper, we propose a novel, multiple-timescale, stochastic RNN for RL. Empirical results show that the network can autonomously learn to abstract sub-goals and can self-develop an action hierarchy using internal dynamics in a challenging continuous control task. Furthermore, we show that the self-developed compositionality of the network enables faster re-learning when adapting to a new task that is a re-composition of previously learned sub-goals, compared with learning from scratch. We also found that improved performance can be achieved when neural activities are subject to stochastic rather than deterministic dynamics.
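    The sketch below illustrates the multiple-timescale idea only: two recurrent levels updated with different leak rates, so the higher level changes slowly enough to hold sub-goal-like representations. The layer sizes, time constants, and use of plain RNN cells are assumptions for illustration, not the paper's exact architecture (which also uses stochastic units).

```python
import torch
import torch.nn as nn

class MultiTimescaleRNN(nn.Module):
    def __init__(self, obs_dim, fast_dim=64, slow_dim=32, tau_fast=2.0, tau_slow=16.0):
        super().__init__()
        self.fast_cell = nn.RNNCell(obs_dim + slow_dim, fast_dim)
        self.slow_cell = nn.RNNCell(fast_dim, slow_dim)
        self.alpha_fast = 1.0 / tau_fast   # large update rate: fast dynamics
        self.alpha_slow = 1.0 / tau_slow   # small update rate: slow dynamics

    def forward(self, obs, h_fast, h_slow):
        # Leaky (time-constant) update: each level moves only part way toward its
        # new target state, so the slow level changes gradually over many steps.
        target_fast = self.fast_cell(torch.cat([obs, h_slow], dim=-1), h_fast)
        h_fast = (1 - self.alpha_fast) * h_fast + self.alpha_fast * target_fast
        target_slow = self.slow_cell(h_fast, h_slow)
        h_slow = (1 - self.alpha_slow) * h_slow + self.alpha_slow * target_slow
        return h_fast, h_slow
```

    The slow level, driven only through the fast level, is the natural place for an abstracted sub-goal representation to emerge, while the fast level handles moment-to-moment control.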

    Self-Organization of Action Hierarchy and Inferring Latent States in Deep Reinforcement Learning with Stochastic Recurrent Neural Networks

    The thesis aims to advance cognitive decision-making and motor control using reinforcement learning (RL) with stochastic recurrent neural networks (RNNs). RL is a computational framework for training an agent, such as a robot, to select the actions that maximize immediate or future rewards. Recently, RL has undergone rapid development through the introduction of artificial neural networks as function approximators. RL using neural networks, also known as deep RL, has shown super-human performance on a wide range of virtual and real-world tasks, such as games, robotic control, and manipulating nuclear fusion devices. Such success would not have been possible without the efforts of numerous researchers who developed and improved deep RL algorithms. In particular, most of these works focus on designing or revising the RL objective functions through mathematical analysis and heuristic ideas. While well-formulated loss functions are critical to RL performance, relatively little effort has been devoted to developing and improving the architecture of the neural network models used in deep RL. The thesis discusses the benefits of using novel network architectures for deep RL. In particular, it includes two of the author's original studies on developing novel stochastic RNN architectures for RL in partially observable environments. The first work proposes a novel, multiple-level, stochastic RNN model for solving tasks that require hierarchical control. It is shown that an action hierarchy, characterized by a consistent representation of abstracted sub-goals in the higher level, self-develops during learning in several challenging continuous robotic control tasks. The emergent action hierarchy is also observed to enable faster relearning when the sub-goals are recomposed. The second work introduces a variational RNN model for predicting state transitions in continuous robotic control tasks in which the environmental state is partially observable. By predicting subsequent observations, the model learns to represent the underlying states of the environment that are indispensable but not observable. A corresponding algorithm is proposed to facilitate efficient learning in partially observable environments. The proposed studies suggest that the performance of RL agents can be improved by appropriate use of stochastic RNN structures, providing novel insights for designing better model architectures for future deep RL studies.
    Okinawa Institute of Science and Technology Graduate University
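    As a rough illustration of the stochastic-RNN ingredient shared by both studies, the sketch below makes a recurrent cell's hidden state a sample from a learned Gaussian via the reparameterization trick. The module names, sizes, and choice of a GRU core are assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

class StochasticRNNCell(nn.Module):
    """Recurrent cell whose hidden activity is sampled rather than deterministic."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.core = nn.GRUCell(input_dim, hidden_dim)
        self.to_mu = nn.Linear(hidden_dim, hidden_dim)
        self.to_logvar = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, h):
        h_det = self.core(x, h)                    # deterministic recurrent update
        mu, logvar = self.to_mu(h_det), self.to_logvar(h_det)
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)    # stochastic hidden state
```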

    Energy-Efficient Visual Search by Eye Movement and Low-Latency Spiking Neural Network

    Human vision incorporates a non-uniform-resolution retina, an efficient eye movement strategy, and a spiking neural network (SNN) to balance the requirements of visual field size, visual resolution, energy cost, and inference latency. These properties have inspired interest in developing human-like computer vision. However, existing models have not fully incorporated the three features of human vision, and their learned eye movement strategies have not been compared with human strategies, making the models' behavior difficult to interpret. Here, we carry out experiments to examine human visual search behaviors and establish the first SNN-based visual search model. The model combines an artificial retina with spiking feature extraction, memory, and saccade decision modules, and it employs population coding for fast and efficient saccade decisions. The model can learn either a human-like or a near-optimal fixation strategy, outperform humans in search speed and accuracy, and achieve high energy efficiency through short saccade decision latency and sparse activation. It also suggests that the human search strategy is suboptimal in terms of search speed. Our work connects modeling of vision in neuroscience and machine learning and sheds light on developing more energy-efficient computer vision algorithms.
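    One way to picture the population-coding component mentioned above is a population-vector readout of the saccade direction, sketched below. The neuron count, tuning layout, and the Poisson stand-in for the SNN's output spikes are illustrative assumptions, not the paper's model.

```python
import numpy as np

n_neurons = 64
# Each output neuron "prefers" one saccade direction, tiled around the circle.
preferred = np.linspace(0.0, 2.0 * np.pi, n_neurons, endpoint=False)

def decode_saccade(spike_counts):
    """Population-vector readout: each neuron votes for its preferred direction,
    weighted by its spike count; the vector sum gives the decoded saccade angle."""
    x = np.sum(spike_counts * np.cos(preferred))
    y = np.sum(spike_counts * np.sin(preferred))
    return np.arctan2(y, x)  # decoded saccade angle in radians

# Hypothetical usage with random spike counts standing in for the SNN output layer.
rng = np.random.default_rng(0)
spikes = rng.poisson(lam=3.0, size=n_neurons)
saccade_angle = decode_saccade(spikes)
```

    Because the readout only needs spike counts over a short window, a decision can be made with low latency and sparse activity, which is where the energy savings come from.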

    Variational Recurrent Models for Solving Partially Observable Control Tasks

    In partially observable (PO) environments, deep reinforcement learning (RL) agents often suffer from unsatisfactory performance, since two problems need to be tackled together: how to extract information from the raw observations to solve the task, and how to improve the policy. In this study, we propose an RL algorithm for solving PO tasks. Our method comprises two parts: a variational recurrent model (VRM) for modeling the environment, and an RL controller that has access to both the environment and the VRM. The proposed algorithm was tested in two types of PO robotic control tasks: those in which either coordinates or velocities were not observable, and those that require long-term memorization. Our experiments show that the proposed algorithm achieved better data efficiency and/or learned a more optimal policy than alternative approaches in tasks in which unobserved states cannot be inferred from raw observations in a simple manner.
    Comment: Published as a conference paper at the Eighth International Conference on Learning Representations (ICLR 2020).
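    A minimal sketch of the variational-recurrent-model idea follows: summarize the observation-action history with an RNN, sample a latent state, and train it to predict the next observation with an ELBO-style loss. The GRU encoder, Gaussian latent, and unit KL weight are assumptions for illustration, not the published implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VRMSketch(nn.Module):
    def __init__(self, obs_dim, act_dim, latent_dim=32, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
        self.post_mu = nn.Linear(hidden_dim, latent_dim)
        self.post_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, obs_dim)

    def loss(self, obs, act, next_obs):
        h, _ = self.rnn(torch.cat([obs, act], dim=-1))            # summarize history
        mu, logvar = self.post_mu(h), self.post_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # sample latent state
        recon = F.mse_loss(self.decoder(z), next_obs)             # predict next observation
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # prior N(0, I)
        return recon + kl
```

    The RL controller would then condition its policy on both the raw observation and the latent state z inferred by the model, so information that is not directly observable can still influence action selection.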

    Habits and goals in synergy: a variational Bayesian framework for behavior

    How to behave efficiently and flexibly is a central problem for understanding biological agents and creating intelligent embodied AI. It is well known that behavior can be classified into two types: reward-maximizing habitual behavior, which is fast but inflexible; and goal-directed behavior, which is flexible but slow. Conventionally, habitual and goal-directed behaviors are considered to be handled by two distinct systems in the brain. Here, we propose to bridge the gap between the two behaviors, drawing on the principles of variational Bayesian theory. We incorporate both behaviors into one framework by introducing a Bayesian latent variable called "intention". Habitual behavior is generated from the prior distribution of intention, which is goal-less; goal-directed behavior is generated from the posterior distribution of intention, which is conditioned on the goal. Building on this idea, we present a novel Bayesian framework for modeling behaviors. Our proposed framework enables skill sharing between the two kinds of behaviors and, by leveraging the idea of predictive coding, enables an agent to seamlessly generalize from habitual to goal-directed behavior without requiring additional training. The proposed framework suggests a fresh perspective for cognitive science and embodied AI, highlighting the potential for greater integration between habitual and goal-directed behaviors.
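    A minimal sketch of the prior/posterior split over the "intention" latent variable is shown below: habitual behavior samples the intention from a goal-less prior, goal-directed behavior samples it from a goal-conditioned posterior, and both drive a shared policy. All module names, sizes, and the Gaussian parameterization are illustrative assumptions, not the paper's framework.

```python
import torch
import torch.nn as nn

class IntentionModel(nn.Module):
    def __init__(self, state_dim, goal_dim, intent_dim=16, act_dim=4):
        super().__init__()
        self.prior = nn.Linear(state_dim, 2 * intent_dim)                   # p(z | s)
        self.posterior = nn.Linear(state_dim + goal_dim, 2 * intent_dim)    # q(z | s, g)
        self.policy = nn.Linear(state_dim + intent_dim, act_dim)            # shared skill

    def _sample(self, params):
        mu, logvar = params.chunk(2, dim=-1)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)

    def act(self, state, goal=None):
        if goal is None:
            z = self._sample(self.prior(state))                             # habitual: goal-less intention
        else:
            z = self._sample(self.posterior(torch.cat([state, goal], dim=-1)))  # goal-directed
        return self.policy(torch.cat([state, z], dim=-1))
```

    Because the same policy head consumes the intention in both modes, skills learned habitually can be reused when a goal is supplied, which is the skill-sharing property the abstract highlights.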
